Search for: All records

Creators/Authors contains: "Vaswani, Namrata"

Note: Clicking a Digital Object Identifier (DOI) link will take you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the publisher's embargo period.

Some links on this page may take you to non-federal websites whose policies may differ from those of this site.

  1. This monograph describes a novel optimization solution framework, called alternating gradient descent (GD) and minimization (AltGDmin), that is useful for many problems for which alternating minimization (AltMin) is a popular solution. AltMin is a special case of the block coordinate descent algorithm that is useful for problems in which minimization w.r.t. one subset of variables, keeping the other fixed, is closed form or otherwise reliably solved. Denote the two blocks/subsets of the optimization variables Z by Zslow and Zfast, i.e., Z = {Zslow, Zfast}. AltGDmin is often a faster solution than AltMin for any problem for which (i) the minimization over one set of variables, Zfast, is much quicker than that over the other set, Zslow; and (ii) the cost function is differentiable w.r.t. Zslow. Often, the reason for one minimization to be quicker is that the problem is "decoupled" for Zfast and each of the decoupled problems is quick to solve. This decoupling is also what makes AltGDmin communication-efficient for federated settings. Important examples where this assumption holds include (a) low rank column-wise compressive sensing (LRCS) and low rank matrix completion (LRMC); (b) their outlier-corrupted extensions such as robust PCA, robust LRCS, and robust LRMC; (c) phase retrieval and its sparse and low-rank model-based extensions; (d) tensor extensions of many of these problems such as tensor LRCS and tensor completion; and (e) many partly discrete problems where GD does not apply, such as clustering, unlabeled sensing, and mixed linear regression. LRCS finds important applications in multi-task representation learning and few-shot learning, federated sketching, and accelerated dynamic MRI. LRMC and robust PCA find important applications in recommender systems, computer vision, and video analytics. (An illustrative code sketch follows below.)
    Free, publicly-accessible full text available January 1, 2026
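
    A minimal, self-contained sketch of the AltGDmin template described in the record above, applied to a toy matrix factorization Y ≈ U B: the fast block B has a closed-form, column-wise decoupled update, and the slow block U takes a single gradient step per iteration. The random initialization, step-size heuristic, and QR re-orthonormalization are illustrative assumptions, not the monograph's exact algorithm or notation.

import numpy as np

# Toy AltGDmin instance: factorize Y ~= U @ B with U (n x r) as the "slow" block
# (one gradient step per iteration) and B (r x q) as the "fast" block (closed-form,
# column-wise decoupled least squares since U is kept orthonormal).
rng = np.random.default_rng(0)
n, q, r = 50, 40, 3
Y = rng.standard_normal((n, r)) @ rng.standard_normal((r, q))     # exactly rank-r data

U = np.linalg.qr(rng.standard_normal((n, r)))[0]    # random orthonormal init (illustrative only)
for _ in range(300):
    B = U.T @ Y                                     # fast block: exact minimizer given orthonormal U
    eta = 1.0 / np.linalg.norm(B @ B.T, 2)          # heuristic 1/Lipschitz step size
    U = U - eta * (U @ B - Y) @ B.T                 # slow block: one GD step on U
    U = np.linalg.qr(U)[0]                          # re-orthonormalize the basis
B = U.T @ Y                                         # final fast-block solve for reporting
print("relative fit error:", np.linalg.norm(U @ B - Y) / np.linalg.norm(Y))
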
  2. In this work, we develop and analyze a novel Gradient Descent (GD) based solution, called Alternating GD and Minimization (AltGDmin), for efficiently solving the low rank matrix completion (LRMC) problem in a federated setting. Here "efficient" refers to communication-, computation-, and sample-efficiency. LRMC involves recovering an n × q rank-r matrix X⋆ from a subset of its entries when r ≪ min(n, q). Our theoretical bounds on the sample complexity and iteration complexity of AltGDmin imply that it is the most communication-efficient solution while also being one of the most computation- and sample-efficient ones. We also extend our guarantee to the noisy LRMC setting. In addition, we show how our lemmas can be used to provide an improved sample complexity guarantee for the Alternating Minimization (AltMin) algorithm for LRMC. AltMin is one of the fastest centralized solutions for LRMC; AltGDmin has a comparable time cost even in the centralized setting. (An illustrative code sketch follows below.)
    Free, publicly-accessible full text available January 1, 2026
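
    A minimal centralized sketch of the entry-wise-sampled LRMC setup in the record above, with the same U (slow) and B (fast) split: each column's coefficients are solved in closed form over that column's observed entries, and U takes one gradient step. The Bernoulli sampling, the simplified spectral-style initialization, and the step-size choice are illustrative assumptions, not the paper's exact federated algorithm or its guarantees.

import numpy as np

rng = np.random.default_rng(1)
n, q, r, p_obs = 60, 80, 3, 0.4                        # toy sizes; p_obs = fraction of observed entries
Xstar = rng.standard_normal((n, r)) @ rng.standard_normal((r, q))
mask = rng.random((n, q)) < p_obs                      # Bernoulli(p_obs) sampling of entries
Y = np.where(mask, Xstar, 0.0)                         # zero-filled observed matrix

# Simplified spectral-style initialization: top-r left singular vectors of Y / p_obs.
U = np.linalg.svd(Y / p_obs, full_matrices=False)[0][:, :r]

for _ in range(300):
    # Fast block: each column's coefficient vector solves a tiny least-squares
    # problem restricted to that column's observed rows (decoupled across columns).
    B = np.zeros((r, q))
    for k in range(q):
        rows = mask[:, k]
        B[:, k] = np.linalg.lstsq(U[rows], Y[rows, k], rcond=None)[0]
    # Slow block: one GD step on U using the residual on observed entries only.
    resid = np.where(mask, U @ B - Y, 0.0)
    eta = 1.0 / np.linalg.norm(B @ B.T, 2)             # conservative step size (illustrative)
    U = np.linalg.qr(U - eta * resid @ B.T)[0]

print("relative recovery error:", np.linalg.norm(U @ B - Xstar) / np.linalg.norm(Xstar))
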
  3. This work considers two related learning problems in a federated attack-prone setting: federated principal components analysis (PCA) and federated low rank column-wise sensing (LRCS). The node attacks are assumed to be Byzantine, which means that the attackers are omniscient and can collude. We introduce a novel provably Byzantine-resilient, communication-efficient, and sample-efficient algorithm, called Subspace-Median, that solves the PCA problem and is a key part of the solution for the LRCS problem. We also study the most natural Byzantine-resilient solution for federated PCA, a geometric median based modification of the federated power method, and explain why it is not useful. Our second main contribution is a complete alternating gradient descent (GD) and minimization (AltGDmin) algorithm for Byzantine-resilient horizontally federated LRCS, along with sample and communication complexity guarantees for it. Extensive simulation experiments are used to corroborate our theoretical guarantees. The ideas that we develop for LRCS are easily extendable to other LR recovery problems as well. (An illustrative code sketch follows below.)
    Free, publicly-accessible full text available November 1, 2025
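
    A sketch of the robust-aggregation idea that the Subspace-Median name points to, as read from the abstract above: combine per-node subspace estimates through a geometric median of their projection matrices and then re-extract an r-dimensional subspace. The Weiszfeld iterations, the toy attack model, and all variable names are illustrative assumptions; the paper's actual algorithm, its use within LRCS, and its guarantees differ in detail.

import numpy as np

def geometric_median(points, n_iter=100, eps=1e-9):
    """Geometric median of row vectors via plain Weiszfeld iterations."""
    z = points.mean(axis=0)
    for _ in range(n_iter):
        dist = np.maximum(np.linalg.norm(points - z, axis=1), eps)
        w = 1.0 / dist
        z = (w[:, None] * points).sum(axis=0) / w.sum()
    return z

def robust_subspace_aggregate(U_list, r):
    """Aggregate per-node r-dimensional subspace estimates: geometric median of
    their projection matrices, followed by a top-r eigendecomposition."""
    P = np.stack([U @ U.T for U in U_list])                    # one projection matrix per node
    P_med = geometric_median(P.reshape(len(U_list), -1)).reshape(P.shape[1:])
    P_med = 0.5 * (P_med + P_med.T)                            # symmetrize against round-off
    eigvecs = np.linalg.eigh(P_med)[1]
    return eigvecs[:, -r:]                                     # top-r eigenvectors

# Toy usage: 8 honest nodes hold noisy estimates of the same 2-D subspace while 2
# Byzantine nodes send arbitrary subspaces; the median-based aggregate stays close
# to the honest subspace even though 2 of the 10 inputs are arbitrary.
rng = np.random.default_rng(2)
n, r = 20, 2
U_true = np.linalg.qr(rng.standard_normal((n, r)))[0]
honest = [np.linalg.qr(U_true + 0.05 * rng.standard_normal((n, r)))[0] for _ in range(8)]
byzantine = [np.linalg.qr(rng.standard_normal((n, r)))[0] for _ in range(2)]
U_hat = robust_subspace_aggregate(honest + byzantine, r)
print("sin-theta distance to true subspace:",
      np.linalg.norm((np.eye(n) - U_hat @ U_hat.T) @ U_true, 2))
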
  4. This note provides a significantly simpler and shorter proof of our sample complexity guarantee for solving the low rank column-wise sensing problem using the Alternating Gradient Descent (GD) and Minimization (AltGDmin) algorithm. AltGDmin was developed and analyzed for solving this problem in our recent work. We also provide an improved guarantee. 
  5. This work introduces a Byzantine-resilient solution for learning a low-dimensional linear representation. Our main contribution is the development of a provably Byzantine-resilient AltGDmin algorithm for solving this problem in a federated setting. We argue that our solution is sample-efficient, fast, and communication-efficient. In solving this problem, we also introduce a novel secure solution to the federated subspace learning meta-problem that occurs in many different applications.
  6. We study how representation learning can improve the learning efficiency of contextual bandit problems. We study the setting where we play T contextual linear bandits with dimension d simultaneously, and these T bandit tasks collectively share a common linear representation with a dimensionality of r ≪ d. We present a new algorithm based on an alternating projected gradient descent (GD) and minimization estimator to recover a low-rank feature matrix. Using the proposed estimator, we present a multi-task learning algorithm for linear contextual bandits and prove a regret bound for our algorithm. We present experiments comparing the performance of our algorithm against benchmark algorithms. (An illustrative code sketch follows below.)
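
    The sketch below does not reproduce the paper's estimator or its regret analysis; it only illustrates why a shared low-dimensional representation improves learning efficiency: once an estimate B_hat of the shared d × r representation is available, each task's regression has r unknowns instead of d, so even m < d samples per task suffice. The sizes, noise level, and the assumption that B_hat is already recovered are illustrative.

import numpy as np

rng = np.random.default_rng(3)
d, r, T, m = 30, 2, 20, 15                      # toy sizes: m samples per task, with m < d but m >= r
B = np.linalg.qr(rng.standard_normal((d, r)))[0]           # shared d x r representation
Theta = B @ rng.standard_normal((r, T))                     # rank-r matrix of per-task parameters

# With the shared representation (approximately) known, each task's regression
# has only r unknowns instead of d, so m < d samples per task already suffice.
B_hat = B                                       # assume B recovered by the alternating estimator (illustrative)
errs = []
for t in range(T):
    X = rng.standard_normal((m, d))                          # contexts observed for task t
    y = X @ Theta[:, t] + 0.01 * rng.standard_normal(m)      # noisy linear rewards
    w_hat = np.linalg.lstsq(X @ B_hat, y, rcond=None)[0]     # r-dimensional least squares
    errs.append(np.linalg.norm(B_hat @ w_hat - Theta[:, t]))
print("mean per-task parameter error:", np.mean(errs))
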
  7. This work studies our recently developed algorithm, the decentralized alternating projected gradient descent algorithm (Dec-AltGDmin), for recovering a low rank (LR) matrix from independent column-wise linear projections in a decentralized setting. This means that the observed data is spread across L agents and there is no central coordinating node. Since this problem is non-convex and since it involves a subspace recovery step, most existing literature from decentralized optimization is not useful. We demonstrate using extensive numerical simulations and communication, time, and sample complexity comparisons that (i) existing decentralized gradient descent (GD) approaches fail, and (ii) other common solution approaches from the LR recovery literature, namely projected GD, alternating GD, and alternating minimization (AltMin), either have a higher communication (and time) complexity or a higher sample complexity. Communication complexity is often the most important concern in decentralized learning. (An illustrative code sketch follows below.)
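
    A toy illustration of the column-wise linear projection data model y_k = A_k x_k with columns partitioned across L agents, followed by one round of neighbour averaging of local gradients on a ring graph. The ring topology, the single gossip round, and all variable names are generic stand-ins meant only to show why the number of communication rounds matters in the decentralized setting; they are not the Dec-AltGDmin protocol analyzed in the paper.

import numpy as np

rng = np.random.default_rng(4)
n, q, r, m, L = 40, 60, 2, 25, 4                 # toy sizes: m measurements per column, L agents
Xstar = rng.standard_normal((n, r)) @ rng.standard_normal((r, q))
A = rng.standard_normal((q, m, n))               # independent sensing matrix A_k per column
Y = np.einsum('kmn,nk->mk', A, Xstar)            # column-wise projections y_k = A_k x_k

cols = np.array_split(np.arange(q), L)           # columns held by each of the L agents
U = np.linalg.qr(rng.standard_normal((n, r)))[0] # current shared subspace estimate (random here)

# Each agent computes its local gradient w.r.t. U from its own columns only;
# the per-column "fast" minimization is local and decoupled.
local_grads = []
for agent in range(L):
    G = np.zeros((n, r))
    for k in cols[agent]:
        b_k = np.linalg.lstsq(A[k] @ U, Y[:, k], rcond=None)[0]
        resid = A[k] @ (U @ b_k) - Y[:, k]
        G += A[k].T @ resid[:, None] @ b_k[None, :]
    local_grads.append(G)

# One round of neighbour averaging on a ring graph: agents move toward, but do not
# reach, the network-wide average gradient, which is why the number of such
# communication rounds dominates the cost of decentralized algorithms.
mixed = [(local_grads[a - 1] + local_grads[a] + local_grads[(a + 1) % L]) / 3 for a in range(L)]
avg = sum(local_grads) / L
print("max disagreement with the true average after one round:",
      max(np.linalg.norm(mixed[a] - avg) for a in range(L)))
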